
Transformers are an extraordinarily powerful computational architecture, applicable across a range of domains. They are, notably, the computational foundation of contemporary Large Language Models (LLMs). LLMs' facility with language has led many to draw analogies between LLMs and human cognitive processing. Colin Klein argues that transformers are so broadly applicable precisely because they have so little built-in representational structure. This naturally raises questions about the need for structured representations and what advantage (if any) they might have over the mere representation of structure. He develops this point in particular in the context of the contemporary revival of the Language of Thought hypothesis.
Colin Klein is a Professor in the School of Philosophy at the Australian National University. His main interests include philosophy of cognitive neuroscience, pain perception, consciousness, the metaphysics of computation, the evolution of cognition, and social epistemology.
Speakers
- Professor Colin Klein (ANU)
Contact
- Alexandre Duval